Conversation

GroophyLifefor
Member

Performance Runner Proposal

Docker-based performance testing framework for JavaScript packages with CI integration and multi-Node.js version support.

Summary

Containerized test runner that provides:

  • Consistent testing environments across Node.js versions (18-24)
  • Automated CI/CD workflows with PR commenting
  • Template system for common performance testing patterns
  • Standardized result formatting

Quick Setup

  1. Add a CI workflow using the Docker image
  2. Create tests in the expf-tests/ directory
  3. Use provided templates (autocannon example included)

For details, read /perf-runner/readme.md.
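
As a hedged illustration only (not the exact template shipped in this PR), a minimal autocannon-based test in expf-tests/ might look like the sketch below; the file name, options, and output shape are assumptions, so treat the readme's template as authoritative.

// expf-tests/hello.mjs (hypothetical sketch; the real template may differ)
import http from 'node:http';
import autocannon from 'autocannon';

// Start a trivial server to benchmark.
const server = http.createServer((req, res) => res.end('hello'));
await new Promise((resolve) => server.listen(0, resolve));
const { port } = server.address();

// Run a short load test against it.
const result = await autocannon({
  url: `http://localhost:${port}`,
  connections: 10, // concurrent connections
  duration: 5,     // seconds
});

// Emit a machine-readable result for the runner to collect.
console.log(JSON.stringify({
  requestsPerSec: result.requests.average,
  latencyMs: result.latency.average,
}));

server.close();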

Full Details

See the complete problem statement and technical specifications in Issue #38.

@GroophyLifefor requested a review from a team August 13, 2025 15:59
@GroophyLifefor self-assigned this Aug 13, 2025
@GroophyLifefor added the enhancement label Aug 13, 2025
@GroophyLifefor linked an issue Aug 13, 2025 that may be closed by this pull request
Member

@wesleytodd left a comment

Awesome to see this work. I think a lot of this would be great as incremental PRs into the existing CLI setup we have merged. Are you up for that? Are the windows issues from #45 blocking you? Would we be able to do the smaller incremental PRs for that to move that forward? Or is it something else blocking integrating this work into the existing CLI?

@@ -0,0 +1,24 @@
FROM debian:bullseye-slim
Member

I would take a look at what the existing docker runner is doing. It uses the official node.js docker images. I think we want to stick to using supported images from either node.js, GH, or our partners like NodeSource.

https://github.com/expressjs/perf-wg/blob/main/packages/runner-docker/Dockerfile

Member Author

Good point. This could also be an optimization; I'm not sure, but it's a valid point.

Member

I don't think we should optimize system things, we should use what our users would use with the goal of being "close to the platform". That way we may catch perf issues from common configurations before users do (or at least can easily test them).

@@ -0,0 +1,62 @@
import { config, validateConfig } from './src/config.mjs';
Member

I think this is a good start for a runner-local package. Maybe take a look at what I did here and follow that pattern? https://github.com/expressjs/perf-wg/blob/main/packages/runner-docker/index.mjs

Member Author

Answered in PR Comment: #43 (comment)

/**
 * Post a comment to a GitHub PR
 */
export async function postComment(message) {
Member

If we are using GHA, we can just rely on their PR checks instead of making comments, right? They even have HTML reports: https://docs.github.com/en/actions/reference/workflows-and-actions/workflow-commands#adding-a-job-summary
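
For reference, a job summary works by appending Markdown to the file GitHub Actions exposes through the GITHUB_STEP_SUMMARY environment variable; a minimal sketch (the function name is illustrative):

// Append Markdown to the GitHub Actions job summary instead of posting a
// PR comment. GITHUB_STEP_SUMMARY points at a file that Actions renders
// on the workflow run page.
import { appendFileSync } from 'node:fs';

export function writeJobSummary(markdown) {
  const summaryPath = process.env.GITHUB_STEP_SUMMARY;
  if (!summaryPath) {
    // Not running under GitHub Actions; fall back to stdout for local runs.
    console.log(markdown);
    return;
  }
  appendFileSync(summaryPath, markdown + '\n');
}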

Member Author

I didn't know that, I'll look into it, thank you.

@@ -0,0 +1,137 @@
import { readFileSync } from 'fs';
Member

It would be awesome to see this as an incremental addition to the existing compare command. Especially awesome to see it output the node version and other things you have in here.
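
As a rough sketch of what attaching environment metadata to a result could look like (the helper name and result shape are hypothetical, not the existing compare command's API):

// Hypothetical helper: tag a benchmark result with the environment that
// produced it, so compared runs can be labeled by Node.js version.
import os from 'node:os';

export function withEnvMetadata(result) {
  return {
    ...result,
    env: {
      node: process.version,       // e.g. 'v22.5.1'
      platform: process.platform,  // e.g. 'linux'
      arch: process.arch,          // e.g. 'x64'
      cpus: os.cpus().length,
    },
  };
}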

Member Author

Answered in PR Comment: #43 (comment)

@GroophyLifefor
Member Author

Awesome to see this work. I think a lot of this would be great as incremental PRs into the existing CLI setup we have merged. Are you up for that? Are the windows issues from #45 blocking you? Would we be able to do the smaller incremental PRs for that to move that forward? Or is it something else blocking integrating this work into the existing CLI?

First of all, you are reading my mind.

  1. This is how I see it: I think what you did was great, but it was a heavy introduction, and then I made a big mistake by complicating things further under the cover of Windows support.
  2. Then I made a CI-based perf-runner because, yeah, if you look at the PR, it is much less code with more features.
  3. BUT just because I presented an approach, setting everything else aside and moving to the structure I had established would create a contradiction between what was reasonable and what was right. So I started thinking about how we could combine the two approaches.

Right now, I have no idea, sorry 😄, but my target is simply:

  • isolated, easier to maintain, splittable across repositories, versioned (as Docker images support)

And your target is simply:

  • local-first, cli-based, everything on perf repo

The base conflict I see

  • mine: splittable across repositories
  • yours: everything on perf repo

I'm thinking of creating a testing folder for repo-specific tests rather than the perf-wg repository setup.

What can be combined

  • ci-based and cli-based: Currently my approach is a CI-only solution, but since I may not be able to make it fully cli-based, I may be able to make it cli-supportive.
  • ci-based and local-first: As above, if we make it cli-supportive, we can also support local performance tests.

@wesleytodd
Member

local-first, cli-based, everything on perf repo

Yes, I did this for what I think are good reasons:

  1. local-first: because we need to be able to iterate on things fast and it is much easier to take a cli/local setup and call that from CI than to design a CI system and add in local runs later.
  2. cli-based: in addition to number 1, being cli based means we can much more easily give contributors instructions on how to run their own quick/one-off perf tests for PRs across the org. If we didn't have this, we would need folks to setup their forks to run CI and depending on the type of change would require forking other unnecessary repos (for example, if the change was in router but the tests target express).
  3. everything on perf repo: I covered this a bit in Types of Performance Tests #12 & Standardized formats, locations, and data contracts #19, but for now having it all here helps us keep things organized. In the end, different types of tests would live in their project repos (see Types of Performance Tests #12 & Standardized formats, locations, and data contracts #19). Until we have all agreed on the formats and requirements though, I think multi-repo would really make collaboration hard.

Does number 3 help resolve this conflict for you? I was treating the tests in this repo as starting examples that we could move out once we are sure they are the right ones. That way, we prevent work happening all over the org's ~40 repos that we might have to update if we make core tooling changes.

Successfully merging this pull request may close these issues.

Isolated and Consistent Performance Testing in CI